
    Long-term observations of terminus position change, structural glaciology and velocity at Ninnis Glacier, George V Land, East Antarctica (1963-2021)

    Over the last four decades, some major East Antarctic outlet glaciers have undergone retreat, thinning and acceleration in response to ocean-climatic forcing. However, other major East Antarctic outlet glaciers remain unstudied in the recent past. Ninnis Glacier is one East Antarctic outlet glacier that is potentially vulnerable to future ocean-climate change and requires monitoring. This thesis quantifies and analyses long-term (1963-2021) changes in terminus position, structural glaciology and velocity at Ninnis Glacier. The results of this study show that Ninnis underwent three major calving events (in 1972-1974, 1998 and 2018), characterised by a 20–25-year periodicity and indicative of a naturally occurring cycle. Each calving event created a large-scale tabular iceberg and formed a new terminus position at similar locations up-ice relative to Ninnis' 1992 grounding line position. The major calving events in 1998 and 2018 were controlled by the development of a central rift system that appears in the same location on Ninnis' tongue, reinforcing the notion of a predictable calving cycle. Ice flow velocity trends before the 2018 calving event (2017-2018) revealed no discernible change in velocity immediately up-ice (+0.2 %) and down-ice (>0 %) of the 1992 grounding line, suggesting that rifting took place within a 'passive' sector of Ninnis' ice tongue. Between 2018 and 2021, Ninnis underwent a pervasive deceleration up-ice (-2.1 %) and down-ice (-1.4 %) of the 1992 grounding line and on the distal ice tongue (-18.7 %), indicating that the 2018 calving event did not result in the loss of dynamically important ice. Although Ninnis has previously been deemed a sector at risk of retreat, it is concluded that Ninnis is not currently undergoing Marine Ice Sheet Instability and is not currently sensitive to external forcing. This is consistent with the low basal melt rates, negligible grounding line retreat and low thermal forcing temperatures observed in the coastal waters at Ninnis.

    Marginalised Normal Regression: Unbiased curve fitting in the presence of x-errors

    The history of the seemingly simple problem of straight-line fitting in the presence of both x and y errors has been fraught with misadventure, with statistically ad hoc and poorly tested methods abounding in the literature. The problem stems from the emergence of latent variables describing the "true" values of the independent variables, the priors on which have a significant impact on the regression result. By analytic calculation of maximum a posteriori values and biases, and comprehensive numerical mock tests, we assess the quality of possible priors. In the presence of intrinsic scatter, the only prior that we find to give reliably unbiased results in general is a mixture of one or more Gaussians with means and variances determined as part of the inference. We find that a single Gaussian is typically sufficient and dub this model Marginalised Normal Regression (MNR). We illustrate the necessity for MNR by comparing it to alternative methods on an important linear relation in cosmology, and extend it to nonlinear regression and an arbitrary covariance matrix linking x and y. We publicly release a Python/Jax implementation of MNR and its Gaussian mixture model extension that is coupled to Hamiltonian Monte Carlo for efficient sampling, which we call ROXY (Regression and Optimisation with X and Y errors). Comment: 14+6 pages, 9 figures; submitted to the Open Journal of Astrophysics
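    Under the single-Gaussian prior described above, the latent "true" x values can be integrated out analytically: if they are drawn from N(mu, w^2) and the x errors, y errors and intrinsic scatter are all Gaussian, each observed (x, y) pair is jointly Gaussian, so the marginal likelihood of a straight line has a closed form. The sketch below illustrates that calculation; it is a minimal illustration of the idea, not the ROXY API, and the function and variable names are hypothetical.

    ```python
    import numpy as np
    from scipy.stats import multivariate_normal

    def mnr_loglike(params, x_obs, y_obs, sig_x, sig_y):
        """Marginal log-likelihood of y = m*x_true + c with Gaussian x errors,
        y errors and intrinsic scatter, after integrating out the latent x_true
        under a single Gaussian prior x_true ~ N(mu, w**2)."""
        m, c, sig_int, mu, w = params
        loglike = 0.0
        for xo, yo, sx, sy in zip(x_obs, y_obs, sig_x, sig_y):
            # Joint Gaussian for (x_obs, y_obs) after marginalising x_true.
            mean = np.array([mu, m * mu + c])
            cov = np.array([
                [w**2 + sx**2, m * w**2],
                [m * w**2, m**2 * w**2 + sy**2 + sig_int**2],
            ])
            loglike += multivariate_normal.logpdf([xo, yo], mean=mean, cov=cov)
        return loglike
    ```

    Maximising or sampling this over (m, c, sig_int, mu, w) treats the latent-variable prior as part of the inference, which is the feature the abstract identifies as necessary for unbiased slopes in the presence of intrinsic scatter.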

    Algebraic analysis of Trivium-like ciphers

    Trivium is a bit-based stream cipher in the final portfolio of the eSTREAM project. In this paper, we apply the approach of Berbain et al. to Trivium-like ciphers and perform new algebraic analyses on them, namely Trivium and its reduced versions: Trivium-N, Bivium-A and Bivium-B. In doing so, we answer an open question in the literature. We demonstrate a new algebraic attack on Bivium-A. This attack requires less time and memory than previous techniques which use the F4 algorithm to recover Bivium-A's initial state. Though our attacks on Bivium-B, Trivium and Trivium-N are worse than exhaustive keysearch, the systems of equations which are constructed are smaller and less complex compared to previous algebraic analyses. Factors which can affect the complexity of our attack on Trivium-like ciphers are discussed in detail.
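    For context, the sketch below reimplements the Trivium state update from the eSTREAM specification; the three AND terms in the feedback are the source of the quadratic monomials that determine the size of the equation systems in algebraic analyses of this kind. It is an illustrative reimplementation, not the authors' attack code, and the bit-ordering of the key and IV may need adjusting to match official test vectors.

    ```python
    def trivium_keystream(key_bits, iv_bits, nbits):
        """Generate nbits of Trivium keystream from 80-bit key and IV lists of 0/1,
        following the eSTREAM specification (state s[0..287] = s_1..s_288)."""
        s = [0] * 288
        s[0:80] = key_bits           # (s_1..s_80)   <- key
        s[93:173] = iv_bits          # (s_94..s_173) <- IV
        s[285], s[286], s[287] = 1, 1, 1

        def step(output=True):
            t1 = s[65] ^ s[92]
            t2 = s[161] ^ s[176]
            t3 = s[242] ^ s[287]
            z = t1 ^ t2 ^ t3 if output else None
            t1 ^= (s[90] & s[91]) ^ s[170]    # quadratic (AND) terms: these give
            t2 ^= (s[174] & s[175]) ^ s[263]  # the degree-2 equations exploited in
            t3 ^= (s[285] & s[286]) ^ s[68]   # algebraic attacks
            s[0:93] = [t3] + s[0:92]          # rotate the three registers and
            s[93:177] = [t1] + s[93:176]      # insert the feedback bits
            s[177:288] = [t2] + s[177:287]
            return z

        for _ in range(4 * 288):              # 1152 blank rounds of key/IV setup
            step(output=False)
        return [step() for _ in range(nbits)]
    ```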

    Demarcating Fringe Science for Policy

    Here we try to characterize the fringe of science as opposed to the mainstream. We want to do this in order to provide some theory of the difference that can be used by policy-makers and other decision-makers, but without violating the principles of what has been called 'Wave Two of Science Studies'. Therefore our demarcation criteria rest on differences in the forms of life of the two activities rather than questions of rationality or rightness; we try to show the ways in which the fringe differs from the mainstream in how its practitioners think about and practice the institution of science. Along the way we provide descriptions of fringe institutions and sciences and their outlets. We concentrate mostly on physics.

    Exhaustive Symbolic Regression

    Symbolic Regression (SR) algorithms learn analytic expressions which both accurately fit data and, unlike traditional machine-learning approaches, are highly interpretable. Conventional SR suffers from two fundamental issues which we address in this work. First, since the number of possible equations grows exponentially with complexity, typical SR methods search the space stochastically and hence do not necessarily find the best function. In many cases, the target problems of SR are sufficiently simple that a brute-force approach is not only feasible, but desirable. Second, the criteria used to select the equation which optimally balances accuracy with simplicity have been variable and poorly motivated. To address these issues we introduce a new method for SR -- Exhaustive Symbolic Regression (ESR) -- which systematically and efficiently considers all possible equations and is therefore guaranteed to find not only the true optimum but also a complete function ranking. Utilising the minimum description length principle, we introduce a principled method for combining these preferences into a single objective statistic. To illustrate the power of ESR we apply it to a catalogue of cosmic chronometers and the Pantheon+ sample of supernovae to learn the Hubble rate as a function of redshift, finding ~40 functions (out of 5.2 million considered) that fit the data more economically than the Friedmann equation. These low-redshift data therefore do not necessarily prefer a ΛCDM expansion history, and traditional SR algorithms that return only the Pareto front, even if they found this successfully, would not locate ΛCDM. We make our code and full equation sets publicly available. Comment: 14 pages, 6 figures, 2 tables. Submitted to IEEE Transactions on Pattern Analysis and Machine Intelligence
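    As a rough illustration of how a minimum-description-length criterion ranks candidate expressions, the sketch below computes a schematic two-part codelength: the cost of stating the functional form and its fitted parameters plus the cost of the residuals under the maximum-likelihood fit. The exact codelength used by ESR differs in its details, and all names here are hypothetical.

    ```python
    import numpy as np

    def description_length(neg_log_like, n_nodes, n_operators, theta_hat, fisher_diag):
        """Schematic two-part MDL score for a candidate expression.

        neg_log_like : -ln(maximum likelihood), the cost of encoding the residuals
        n_nodes      : number of nodes in the expression tree
        n_operators  : size of the operator basis set
        theta_hat    : best-fit values of the free parameters
        fisher_diag  : diagonal of the observed Fisher information at theta_hat,
                       which sets the precision to which parameters are worth encoding
        """
        structure_cost = n_nodes * np.log(n_operators)   # which operator at each node
        theta_hat = np.atleast_1d(np.asarray(theta_hat, dtype=float))
        fisher_diag = np.atleast_1d(np.asarray(fisher_diag, dtype=float))
        param_cost = np.sum(0.5 * np.log(fisher_diag) + np.log(np.abs(theta_hat)))
        return neg_log_like + structure_cost + param_cost
    ```

    The expression with the smallest total codelength compresses the data best; ranking every candidate by such a score is how the ~40 functions reported to beat the Friedmann equation on these data are identified.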

    Priors for symbolic regression

    When choosing between competing symbolic models for a data set, a human will naturally prefer the "simpler" expression or the one which more closely resembles equations previously seen in a similar context. This suggests a non-uniform prior on functions, which is, however, rarely considered within a symbolic regression (SR) framework. In this paper we develop methods to incorporate detailed prior information on both functions and their parameters into SR. Our prior on the structure of a function is based on an n-gram language model, which is sensitive to the arrangement of operators relative to one another in addition to the frequency of occurrence of each operator. We also develop a formalism based on the Fractional Bayes Factor to treat numerical parameter priors in such a way that models may be fairly compared through the Bayesian evidence, and explicitly compare Bayesian, Minimum Description Length and heuristic methods for model selection. We demonstrate the performance of our priors relative to literature standards on benchmarks and a real-world dataset from the field of cosmology. Comment: 8+2 pages, 2 figures. Submitted to The Genetic and Evolutionary Computation Conference (GECCO) 2023 Workshop on Symbolic Regression
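    To make the structural prior concrete, the sketch below scores an expression's prefix-notation operator sequence with a bigram model estimated from a corpus of known equations. It is a simplified stand-in (add-alpha smoothing rather than the n-gram model with back-off described in the paper), and the names are illustrative.

    ```python
    from collections import Counter
    from math import log

    def bigram_log_prior(sequence, corpus, alpha=1.0):
        """Log-prior of an operator sequence in prefix notation, e.g.
        ['add', 'mul', 'theta', 'x', 'exp', 'x'], under a bigram model with
        add-alpha smoothing trained on a corpus of known equations."""
        vocab = {tok for seq in corpus for tok in seq} | set(sequence)
        unigrams = Counter(tok for seq in corpus for tok in seq)
        bigrams = Counter(pair for seq in corpus for pair in zip(seq, seq[1:]))
        total = sum(unigrams.values())

        logp, prev = 0.0, None
        for tok in sequence:
            if prev is None:   # first token: smoothed unigram probability
                num, den = unigrams[tok] + alpha, total + alpha * len(vocab)
            else:              # later tokens: smoothed bigram probability
                num, den = bigrams[(prev, tok)] + alpha, unigrams[prev] + alpha * len(vocab)
            logp += log(num / den)
            prev = tok
        return logp
    ```

    Sequences whose operator patterns resemble the corpus receive a higher log-prior, which is the sense in which such a prior prefers expressions that look like equations previously seen in a similar context.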

    Simulated Clinical Encounters Using Patient-Operated mHealth: Experimental Study to Investigate Patient-Provider Communication

    BACKGROUND: This study investigates patient-centered mobile health (mHealth) technology in terms of the secondary user experience (UX). Specifically, it examines how personal mobile technology, under patient control, can be used to improve patient-provider communication about the patient's health care during their first visit to a provider. Common ground, a theory about language use, is used as the theoretical basis to examine interactions. A novel aspect of this study is that it is one of the first empirical studies to explore the relative meaningfulness of a secondary UX for specific health care tasks. OBJECTIVE: The objective of this study was to investigate the extent to which patient-operated mHealth technology can be designed to improve communication between the patient and provider during an initial face-to-face encounter. METHODS: The experimental study was conducted in 2 large Midwestern cities from February 2016 to May 2016. A custom-designed smartphone app prototype was used as the study treatment. The experiment used a posttest-only control group design and included video-recorded simulated face-to-face clinical encounters in which an actor role-played a patient. Experienced clinicians consisting of doctors (n=4) and nurses (n=8) were the study participants. A thematic analysis of qualitative data was performed. Quantitative data collected from time-on-task measurements were analyzed using descriptive statistics. RESULTS: Three themes emerged from the qualitative analysis, representing how grounding manifested during the encounter, what it meant for communication during the encounter, and how it influenced the provider's perception of the patient. The descriptive statistics were important for inferring evidence of efficiency and effectiveness of communication for providers. Overall, encounter and task times averaged slightly faster in almost every instance for the treatment group than for the control group. Common ground was clearly better in the treatment group, indicating that the idea of designing for the secondary UX to improve provider outcomes has merit. CONCLUSIONS: Combining the notions of common ground, human-computer interaction design, and smartphone technology resulted in a prototype that improved the efficiency and effectiveness of face-to-face collaboration for secondary users. The experimental study is one of the first studies to demonstrate that an investment in the secondary UX for high-payoff tasks has value, but that not all secondary UXs are meaningful for design. This observation is useful for prioritizing how resources should be applied when considering the secondary UX.

    The Simplest Inflationary Potentials

    Inflation is a highly favoured theory for the early Universe. It is compatible with current observations of the cosmic microwave background and large scale structure and is a driver in the quest to detect primordial gravitational waves. It is also, given the current quality of the data, highly under-determined with a large number of candidate implementations. We use a new method in symbolic regression to generate all possible simple scalar field potentials for one of two possible basis sets of operators. Treating these as single-field, slow-roll inflationary models we then score them with an information-theoretic metric ("minimum description length") that quantifies their efficiency in compressing the information in the Planck data. We explore two possible priors on the parameter space of potentials, one related to the functions' structural complexity and one that uses a Katz back-off language model to prefer functions that may be theoretically motivated. This enables us to identify the inflaton potentials that optimally balance simplicity with accuracy at explaining the Planck data, which may subsequently find theoretical motivation. Our exploratory study opens the door to extraction of fundamental physics directly from data, and may be augmented with more refined theoretical priors in the quest for a complete understanding of the early Universe. Comment: 13+4 pages, 4 figures; submitted to Physical Review
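    The scoring of each candidate potential relies on the standard single-field slow-roll relations; the sketch below shows those textbook formulas with sympy doing the derivatives, in units where the reduced Planck mass is 1. It is a generic illustration of the slow-roll step, not the paper's pipeline.

    ```python
    import sympy as sp

    phi, m = sp.symbols('phi m', positive=True)

    def slow_roll_observables(V):
        """Potential slow-roll parameters and the resulting spectral index n_s
        and tensor-to-scalar ratio r (to leading order), with M_pl = 1."""
        eps = sp.Rational(1, 2) * (sp.diff(V, phi) / V) ** 2   # epsilon_V
        eta = sp.diff(V, phi, 2) / V                           # eta_V
        n_s = 1 - 6 * eps + 2 * eta
        r = 16 * eps
        return tuple(sp.simplify(q) for q in (eps, eta, n_s, r))

    # Example: the quadratic potential V = (1/2) m^2 phi^2.
    eps, eta, n_s, r = slow_roll_observables(sp.Rational(1, 2) * m**2 * phi**2)
    print(n_s, r)   # 1 - 8/phi**2, 32/phi**2
    ```

    Each candidate potential from the symbolic-regression search can be pushed through relations like these before being scored against the Planck data, with the structural-complexity and language-model priors breaking ties between functions of similar accuracy.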

    Comparison of random forest and parametric imputation models for imputing missing data using MICE: a CALIBER study.

    Multivariate imputation by chained equations (MICE) is commonly used for imputing missing data in epidemiologic research. The "true" imputation model may contain nonlinearities which are not included in default imputation models. Random forest imputation is a machine learning technique which can accommodate nonlinearities and interactions and does not require a particular regression model to be specified. We compared parametric MICE with a random forest-based MICE algorithm in 2 simulation studies. The first study used 1,000 random samples of 2,000 persons drawn from the 10,128 stable angina patients in the CALIBER database (Cardiovascular Disease Research using Linked Bespoke Studies and Electronic Records; 2001-2010) with complete data on all covariates. Variables were artificially made "missing at random," and the bias and efficiency of parameter estimates obtained using different imputation methods were compared. Both MICE methods produced unbiased estimates of (log) hazard ratios, but random forest was more efficient and produced narrower confidence intervals. The second study used simulated data in which the partially observed variable depended on the fully observed variables in a nonlinear way. Parameter estimates were less biased using random forest MICE, and confidence interval coverage was better. This suggests that random forest imputation may be useful for imputing complex epidemiologic data sets in which some patients have missing data.
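    In Python, a comparison along the same lines can be sketched with scikit-learn's IterativeImputer, which performs chained-equations imputation and accepts either a parametric (linear) estimator or a random forest. This is an illustration of the general approach on toy data, not the study's implementation, and it returns a single completed data set rather than the multiple imputations used in a full MICE analysis.

    ```python
    import numpy as np
    from sklearn.experimental import enable_iterative_imputer  # noqa: F401
    from sklearn.impute import IterativeImputer
    from sklearn.linear_model import BayesianRidge
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)

    # Toy data: x2 depends nonlinearly on x1, and ~30% of x2 is missing at random.
    n = 500
    x1 = rng.normal(size=n)
    x2 = 0.5 * x1**2 + rng.normal(scale=0.3, size=n)
    X_true = np.column_stack([x1, x2])
    X_missing = X_true.copy()
    X_missing[rng.random(n) < 0.3, 1] = np.nan

    imputers = {
        "parametric (linear)": IterativeImputer(estimator=BayesianRidge(),
                                                max_iter=10, random_state=0),
        "random forest": IterativeImputer(estimator=RandomForestRegressor(
                                              n_estimators=100, random_state=0),
                                          max_iter=10, random_state=0),
    }
    for name, imp in imputers.items():
        X_imp = imp.fit_transform(X_missing)
        rmse = np.sqrt(np.mean((X_imp[:, 1] - X_true[:, 1]) ** 2))
        print(f"{name}: RMSE of imputed x2 = {rmse:.3f}")
    ```

    Because the forest can represent the quadratic dependence without it being specified, its imputations of x2 should track the true values more closely, mirroring the reduced bias under nonlinearity reported in the second simulation study.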